170 research outputs found

    Quality of Service Properties in Grid Resource Discovery

    One of the open problems in the context of Service-Oriented Architectures is the discovery of resources and/or services that are suitable for carrying out a given task. Grid information providers essentially offer functional information about the Grid resources they monitor, so Grid information models mostly represent this syntactic information, and Grid information consumers normally use these functional properties to select resources. In practice, many jobs have to be restarted because of resource failures, although there are initiatives that try to use isolated techniques to handle some quality-of-service properties. This article proposes a new approach for modelling Grid resources together with quality-of-service properties. On the one hand, the model is based on an ontology developed to integrate existing models for both Grid information representation and quality of service in general. On the other hand, it also proposes a measurement system, currently under development, for some quality-of-service properties (availability, performance and reliability).
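    As a rough illustration of the measurement system mentioned at the end of the abstract, the following Python sketch aggregates monitoring probes into the three quality-of-service properties named there. The `Probe` record and the formulas are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    reachable: bool      # did the resource answer the monitoring probe?
    latency_s: float     # response time in seconds
    job_failed: bool     # did a job running on the resource fail?

def qos_summary(probes: list) -> dict:
    n = len(probes)
    up = sum(p.reachable for p in probes)
    failed = sum(p.job_failed for p in probes)
    return {
        # fraction of probes in which the resource was reachable
        "availability": up / n,
        # mean response time over successful probes
        "performance_s": sum(p.latency_s for p in probes if p.reachable) / max(up, 1),
        # fraction of jobs that completed without failure
        "reliability": 1.0 - failed / n,
    }
```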

    Deep Multi-Segmentation Approach for the Joint Classification and Segmentation of the Retinal Arterial and Venous Trees in Color Fundus Images

    Presented at the 4th XoveTIC Conference, A Coruña, Spain, 7–8 October 2021.

    [Abstract] The analysis of the retinal vasculature represents a crucial stage in the diagnosis of several diseases. An exhaustive analysis involves segmenting the retinal vessels and classifying them into veins and arteries. In this work, we present an accurate approach, based on deep neural networks, for the joint segmentation and classification of the retinal veins and arteries from color fundus images. The presented approach decomposes this joint task into three related subtasks: the segmentation of arteries, of veins, and of the whole vascular tree. The experiments performed show that our method achieves competitive results in the discrimination of arteries and veins, while clearly enhancing the segmentation of the different structures. Moreover, unlike other approaches, our method allows for the straightforward detection of vessel crossings, and preserves the continuity of the arterial and venous vascular trees at these locations.

    This work was funded by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Axencia Galega de Innovación (GAIN), Xunta de Galicia, ref. IN845D 2020/38; Xunta de Galicia and European Social Fund (ESF) of the EU through the predoctoral grant contracts ED481A-2017/328 and ED481A 2021/140; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, is funded by Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
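    A minimal PyTorch sketch of the decomposition described above: one shared network with three segmentation heads, one per subtask. The architecture and loss are assumptions for illustration, not the authors' exact design.

```python
import torch.nn as nn

class MultiSegNet(nn.Module):
    def __init__(self, backbone: nn.Module, features: int):
        super().__init__()
        self.backbone = backbone                       # shared encoder-decoder body
        self.artery = nn.Conv2d(features, 1, kernel_size=1)   # artery map
        self.vein = nn.Conv2d(features, 1, kernel_size=1)     # vein map
        self.vessel = nn.Conv2d(features, 1, kernel_size=1)   # whole-tree map

    def forward(self, x):
        f = self.backbone(x)                           # shared feature maps
        return self.artery(f), self.vein(f), self.vessel(f)

def multi_seg_loss(outputs, targets):
    # one binary term per subtask; a crossing pixel may be positive in all
    # three maps, which keeps both trees continuous at vessel crossings
    bce = nn.BCEWithLogitsLoss()
    return sum(bce(o, t) for o, t in zip(outputs, targets))
```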

    Paired and Unpaired Deep Generative Models on Multimodal Retinal Image Reconstruction

    [Abstract] This work explores the use of paired and unpaired data for training deep neural networks in the multimodal reconstruction of retinal images. In particular, we focus on the reconstruction of fluorescein angiography from retinography, which are two complementary representations of the eye fundus. The experiments performed allow a comparison of the paired and unpaired alternatives.

    Instituto de Salud Carlos III; DTS18/00136
    Ministerio de Ciencia, Innovación y Universidades; DPI2015-69948-R
    Ministerio de Ciencia, Innovación y Universidades; RTI2018-095894-B-I00
    Xunta de Galicia; ED431G/01
    Xunta de Galicia; ED431C 2016-047
    Xunta de Galicia; ED481A-2017/328
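    A hedged sketch of the two training regimes being compared, with the adversarial terms omitted for brevity: paired data supports a direct pixel-wise loss, while unpaired data must fall back on CycleGAN-style cycle consistency. The generators `G` (retinography to angiography) and `F_net` (the reverse) are placeholders.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def paired_loss(G, retino, angio):
    # registered pairs: the real angiography supervises every pixel directly
    return l1(G(retino), angio)

def unpaired_cycle_loss(G, F_net, retino, angio):
    # no pixel correspondence between samples: require that translating to
    # the other modality and back reproduces the input (cycle consistency)
    return l1(F_net(G(retino)), retino) + l1(G(F_net(angio)), angio)
```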

    Color Fundus Image Registration Using a Learning-Based Domain-Specific Landmark Detection Methodology

    [Abstract] Medical imaging, and particularly retinal imaging, allows the accurate diagnosis of many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because the novel deep learning methods cannot yet compete with them in terms of results and the commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, based on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art.

    This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia, Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the predoctoral grant contract ref. ED481A 2021/147 and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). The funding institutions had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Funding for open access charge: Universidade da Coruña/CISUG.
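    A simplified sketch of the matching-and-estimation stage described above, using scikit-image's RANSAC. The landmark coordinates are assumed to come from the detection network; the descriptor-free nearest-neighbour pairing and the affine model are illustrative simplifications, not the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import ransac
from skimage.transform import AffineTransform

def register(fixed_pts: np.ndarray, moving_pts: np.ndarray):
    # descriptor-free matching: pair each fixed landmark with its nearest
    # moving landmark, then let RANSAC discard the wrong pairs
    tree = cKDTree(moving_pts)
    _, idx = tree.query(fixed_pts)
    src, dst = moving_pts[idx], fixed_pts
    model, inliers = ransac(
        (src, dst), AffineTransform,
        min_samples=3, residual_threshold=5, max_trials=2000)
    return model, inliers  # transform mapping moving -> fixed, inlier mask
```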

    Retinal Microaneurysms Detection Using Adversarial Pre-training With Unlabeled Multimodal Images

    [Abstract] The detection of retinal microaneurysms is crucial for the early detection of important diseases such as diabetic retinopathy. However, the detection of these lesions in retinography, the most widely available retinal imaging modality, remains a very challenging task. This is mainly due to the tiny size and low contrast of the microaneurysms in the images. Consequently, the automated detection of microaneurysms usually relies on extensive ad hoc processing. In this regard, although microaneurysms can be more easily detected using fluorescein angiography, this alternative imaging modality is invasive and not adequate for regular preventive screening. In this work, we propose a novel deep learning methodology that takes advantage of unlabeled multimodal image pairs to improve the detection of microaneurysms in retinography. In particular, we propose a novel adversarial multimodal pre-training consisting of the prediction of fluorescein angiography from retinography using generative adversarial networks. This pre-training allows learning about the retina and the microaneurysms without any manually annotated data. Additionally, we also propose to approach microaneurysm detection as a heatmap regression, which allows an efficient detection and precise localization of multiple microaneurysms. To validate and analyze the proposed methodology, we performed exhaustive experiments on different public datasets and provide relevant comparisons against different state-of-the-art approaches. The results show a satisfactory performance of the proposal, achieving an Average Precision of 64.90%, 31.36%, and 33.55% on the E-Ophtha, ROC, and DDR public datasets. Overall, the proposed approach outperforms existing deep learning alternatives while providing a more straightforward detection method that can be effectively applied to raw unprocessed retinal images.

    This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Xunta de Galicia and the European Social Fund (ESF) of the EU through the predoctoral grant contract ref. ED481A-2017/328; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, grant ref. ED431C 2020/24. CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). Funding for open access charge: Universidade da Coruña/CISUG.
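    The heatmap-regression formulation reduces detection to finding peaks in the predicted map. A minimal sketch of that post-processing step, with an assumed score threshold and window size:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(heatmap: np.ndarray, thr: float = 0.5, window: int = 5):
    # a pixel is a detection if it is the local maximum within `window`
    # and its predicted score exceeds `thr`
    local_max = heatmap == maximum_filter(heatmap, size=window)
    ys, xs = np.nonzero(local_max & (heatmap > thr))
    # return (x, y, score) per detected microaneurysm candidate
    return list(zip(xs, ys, heatmap[ys, xs]))
```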

    Self-Supervised Multimodal Reconstruction Pre-training for Retinal Computer-Aided Diagnosis

    Funded for open access publication: Universidade da Coruña/CISUG

    [Abstract] Computer-aided diagnosis using retinal fundus images is crucial for the early detection of many ocular and systemic diseases. Nowadays, deep learning-based approaches are commonly used for this purpose. However, training deep neural networks usually requires a large amount of annotated data, which is not always available. In practice, this issue is commonly mitigated with different techniques, such as data augmentation or transfer learning. Nevertheless, the latter is typically approached using networks that were pre-trained on additional annotated data. An emerging alternative to the traditional transfer learning source tasks is the use of self-supervised tasks that do not require manually annotated data for training. In that regard, we propose a novel self-supervised visual learning strategy for improving retinal computer-aided diagnosis systems using unlabeled multimodal data. In particular, we explore the use of a multimodal reconstruction task between complementary retinal imaging modalities. This makes it possible to take advantage of existing unlabeled multimodal data in the medical domain, improving the diagnosis of different ocular diseases with additional domain-specific knowledge that does not rely on manual annotation. To validate and analyze the proposed approach, we performed several experiments aimed at the diagnosis of different diseases, including two of the most prevalent impairing ocular disorders: glaucoma and age-related macular degeneration. Additionally, the advantages of the proposed approach are clearly demonstrated in the comparisons that we perform against both the common fully-supervised approaches in the literature and current self-supervised alternatives for retinal computer-aided diagnosis. In general, the results show a satisfactory performance of our proposal, which improves on existing alternatives by leveraging the unlabeled multimodal visual data that is commonly available in the medical field.

    This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Xunta de Galicia and the European Social Fund (ESF) of the EU through the predoctoral grant contract ref. ED481A-2017/328; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, grant ref. ED431C 2020/24. CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
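    A schematic sketch, using assumed placeholder modules and data loaders, of the two-stage strategy the abstract describes: self-supervised multimodal reconstruction pre-training followed by supervised fine-tuning for diagnosis.

```python
import torch
import torch.nn as nn

def pretrain(encoder, decoder, pairs, epochs=50):
    # stage 1: learn retinal patterns from unlabeled retinography/angiography
    # pairs by predicting one modality from the other (no annotations used)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for retino, angio in pairs:
            opt.zero_grad()
            loss = l1(decoder(encoder(retino)), angio)
            loss.backward()
            opt.step()

def finetune(encoder, head, labeled, epochs=20):
    # stage 2: reuse the pre-trained encoder for diagnosis on scarce labels
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, label in labeled:
            opt.zero_grad()
            loss = ce(head(encoder(image)), label)
            loss.backward()
            opt.step()
```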

    End-To-End Multi-Task Learning for Simultaneous Optic Disc and Cup Segmentation and Glaucoma Classification in Eye Fundus Images

    [Abstract] The automated analysis of eye fundus images is crucial for facilitating the screening and early diagnosis of glaucoma. Nowadays, there are two common alternatives for the diagnosis of this disease using deep neural networks. One is the segmentation of the optic disc and cup, followed by the morphological analysis of these structures. The other is to directly address the diagnosis as an image classification task. The segmentation approach presents the advantage of using pixel-level labels with precise morphological information for training. While this detailed training feedback is not available for the classification approach, in that case the image-level labels may allow the discovery of additional non-morphological cues that are also relevant for the diagnosis. In this work, we propose a novel multi-task approach for the simultaneous classification of glaucoma and segmentation of the optic disc and cup. This approach can improve the overall performance by taking advantage of both pixel-level and image-level labels during network training. Additionally, the segmentation maps that are predicted together with the diagnosis allow the extraction of relevant biomarkers such as the cup-to-disc ratio. The proposed methodology presents two relevant technical novelties. First, a network architecture for simultaneous segmentation and classification that increases the number of shared parameters between both tasks. Second, a multi-adaptive optimization strategy that ensures that both tasks contribute similarly to the parameter updates during training, avoiding the use of loss weighting hyperparameters. To validate our proposal, exhaustive experiments were performed on the public REFUGE and DRISHTI-GS datasets. The results show that our proposal outperforms comparable multi-task baselines and is highly competitive with existing state-of-the-art approaches. Additionally, the provided ablation study shows that both the network architecture and the optimization approach are independently advantageous for multi-task learning.

    This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Axencia Galega de Innovación (GAIN), Xunta de Galicia, through grant ref. IN845D 2020/38; Xunta de Galicia and the European Social Fund (ESF) of the EU through the predoctoral contract ref. ED481A-2017/328; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, grant ref. ED431C 2020/24. CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). Funding for open access charge: Universidade da Coruña/CISUG.
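    One way to realize the "similar contribution" idea without loss-weighting hyperparameters is to normalize each task's loss by its gradient norm on the shared parameters. The following PyTorch sketch shows that normalization scheme as an illustrative assumption; it is not necessarily the paper's exact update rule.

```python
import torch

def balanced_step(opt, seg_loss, cls_loss, shared_params):
    # shared_params: list of tensors shared by both tasks (e.g. the encoder)
    g_seg = torch.autograd.grad(seg_loss, shared_params, retain_graph=True)
    g_cls = torch.autograd.grad(cls_loss, shared_params, retain_graph=True)
    n_seg = torch.sqrt(sum((g ** 2).sum() for g in g_seg))
    n_cls = torch.sqrt(sum((g ** 2).sum() for g in g_cls))
    # scale each loss inversely to its gradient norm so both tasks push the
    # shared parameters with comparable strength, then take one step
    opt.zero_grad()
    total = seg_loss / (n_seg + 1e-8) + cls_loss / (n_cls + 1e-8)
    total.backward()
    opt.step()
```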

    Learning the Retinal Anatomy From Scarce Annotated Data Using Self-Supervised Multimodal Reconstruction

    [Abstract] Deep learning is becoming the reference paradigm for approaching many computer vision problems. Nevertheless, the training of deep neural networks typically requires a significantly large amount of annotated data, which is not always available. A proven approach to alleviate the scarcity of annotated data is transfer learning. However, in practice, the use of this technique typically relies on the availability of additional annotations, either from the same domain or from the natural image domain. We propose a novel alternative that makes it possible to apply transfer learning from unlabelled data of the same domain, which consists in the use of a multimodal reconstruction task. A neural network trained to generate one image modality from another must learn relevant patterns from the images to successfully solve the task. These learned patterns can then be used to solve additional tasks in the same domain, reducing the need for a large amount of annotated data. In this work, we apply the described idea to the localization and segmentation of the most important anatomical structures of the eye fundus in retinography. The objective is to reduce the amount of annotated data that is required to solve the different tasks using deep neural networks. For that purpose, a neural network is pre-trained using the self-supervised multimodal reconstruction of fluorescein angiography from retinography. Then, the network is fine-tuned on the different target tasks performed on the retinography. The obtained results demonstrate that the proposed self-supervised transfer learning strategy leads to state-of-the-art performance in all the studied tasks, with a significant reduction of the required annotations.

    This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project, and by Ministerio de Economía, Industria y Competitividad, Government of Spain, through the DPI2015-69948-R research project. The authors of this work also receive financial support from the ERDF and Xunta de Galicia through Grupo de Referencia Competitiva, ref. ED431C 2016-047, and from the European Social Fund (ESF) of the EU and Xunta de Galicia through the predoctoral grant contract ref. ED481A-2017/328. CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
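    The transfer step can be as simple as reusing the reconstruction network's weights and swapping its output layer for a task-specific head before fine-tuning. A hypothetical sketch, assuming the network exposes its final layer as `.out`:

```python
import torch.nn as nn

def to_task_head(pretrained: nn.Module, n_classes: int) -> nn.Module:
    # hypothetical: assumes the reconstruction network's last layer is
    # exposed as `pretrained.out` (a Conv2d producing the 1-channel
    # predicted angiography); replace it with a task-specific output layer
    pretrained.out = nn.Conv2d(pretrained.out.in_channels, n_classes,
                               kernel_size=1)
    return pretrained  # fine-tune on the scarce annotated target data
```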

    Self-Supervised Multimodal Reconstruction of Retinal Images Over Paired Datasets

    [Abstract] Data scarcity represents an important constraint for the training of deep neural networks in medical imaging. Medical image labeling, especially if pixel-level annotations are required, is an expensive task that needs expert intervention and usually results in a reduced number of annotated samples. In contrast, extensive amounts of unlabeled data are produced in daily clinical practice, including paired multimodal images from patients who underwent multiple imaging tests. This work proposes a novel self-supervised multimodal reconstruction task that takes advantage of this unlabeled multimodal data for learning about the domain without human supervision. Paired multimodal data is a rich source of clinical information that can be naturally exploited by trying to estimate one image modality from others. This multimodal reconstruction requires the recognition of domain-specific patterns that can be used to complement the training of image analysis tasks in the same domain, for which annotated data is scarce. In this work, a set of experiments is performed using a multimodal setting of retinography and fluorescein angiography pairs that offer complementary information about the eye fundus. The evaluations performed on different public datasets, which include pathological and healthy data samples, demonstrate that a network trained for the self-supervised multimodal reconstruction of angiography from retinography achieves unsupervised recognition of important retinal structures. These results indicate that the proposed self-supervised task provides relevant cues for image analysis tasks in the same domain.

    This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project, and by Ministerio de Economía, Industria y Competitividad, Government of Spain, through the DPI2015-69948-R research project. The authors of this work also receive financial support from the ERDF and Xunta de Galicia through Grupo de Referencia Competitiva, ref. ED431C 2016-047, and from the European Social Fund (ESF) of the EU and Xunta de Galicia through the predoctoral grant contract ref. ED481A-2017/328. CITIC, Centro de Investigación de Galicia, ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
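    The reported unsupervised recognition of retinal structures can be probed with a simple check: vessels appear hyperfluorescent (bright) in angiography, so thresholding the top intensities of a predicted angiography yields a rough vessel map with no annotation involved. The percentile threshold below is an assumption for illustration.

```python
import numpy as np

def rough_vessel_map(predicted_angio: np.ndarray, pct: float = 95.0) -> np.ndarray:
    # keep the brightest pixels of the reconstruction network's prediction;
    # these roughly coincide with the vasculature it learned to highlight
    thr = np.percentile(predicted_angio, pct)
    return predicted_angio >= thr
```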

    Automatic Detection of Freshwater Phytoplankton Specimens in Conventional Microscopy Images

    [Abstract] Water safety and quality can be compromised by the proliferation of toxin-producing phytoplankton species, requiring continuous monitoring of water sources. This analysis involves the identification and counting of these species, which requires broad experience and knowledge. The automation of these tasks is highly desirable, as it would release the experts from tedious work, eliminate subjective factors, and improve repeatability. Thus, in this preliminary work, we propose to advance towards an automatic methodology for phytoplankton analysis in digital images of water samples acquired using regular microscopes. In particular, we propose a novel and fully automatic method to detect and segment the phytoplankton specimens present in these images using classical computer vision algorithms. The proposed method is able to correctly detect sparse colonies as single phytoplankton candidates, thanks to a novel fusion algorithm, and is able to differentiate phytoplankton specimens from other objects in the microscope samples (such as minerals, bubbles or detritus) using a machine learning approach that exploits texture and colour features. Our preliminary experiments demonstrate that the proposed method provides satisfactory and accurate results.

    This work is supported by the European Regional Development Fund (ERDF) of the European Union and Xunta de Galicia through Centro de Investigación del Sistema Universitario de Galicia, ref. ED431G 2019/01.
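    A hedged sketch of the candidate-classification stage described above: texture (GLCM) and mean-colour features computed per detected region feed a classical classifier that separates phytoplankton from detritus, bubbles or minerals. The specific features and classifier are assumptions, not necessarily the paper's exact setup.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def region_features(gray_patch: np.ndarray, rgb_patch: np.ndarray) -> np.ndarray:
    # gray_patch must be uint8; the GLCM summarizes texture within the region
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0], levels=256)
    texture = [graycoprops(glcm, prop)[0, 0]
               for prop in ("contrast", "homogeneity", "energy")]
    colour = rgb_patch.reshape(-1, 3).mean(axis=0).tolist()  # mean R, G, B
    return np.array(texture + colour)

# a classical classifier over these per-candidate features then separates
# phytoplankton from other objects, e.g.:
# clf = RandomForestClassifier().fit(X_train, y_train)
```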